Hyperparameter optimization for neural network based on improved real coding genetic algorithm
Wei SHE, Yang LI, Lihong ZHONG, Defeng KONG, Zhao TIAN
Journal of Computer Applications    2024, 44 (3): 671-676.   DOI: 10.11772/j.issn.1001-9081.2023040441

To address the problems of poor effectiveness, easily falling into suboptimal solutions, and low efficiency in neural network hyperparameter optimization, a hyperparameter optimization algorithm for neural networks based on an Improved Real Coding Genetic Algorithm (IRCGA) was proposed, named IRCGA-DNN (IRCGA for Deep Neural Network). Firstly, a real-coded form was used to represent hyperparameter values, which made the hyperparameter search space more flexible. Then, a hierarchical proportional selection operator was introduced to enhance the diversity of the solution set. Finally, improved single-point crossover and mutation operators were designed to explore the hyperparameter space more thoroughly and to improve the efficiency and quality of the optimization, respectively. Two simulation datasets were used to evaluate IRCGA in terms of damage effectiveness prediction and convergence efficiency. The experimental results on the two datasets indicate that, compared to GA-DNN (Genetic Algorithm for Deep Neural Network), the proposed algorithm reduces the number of convergence iterations by 8.7% and 13.6% respectively with comparable MSE (Mean Square Error); compared to IGA-DNN (Improved Genetic Algorithm for Deep Neural Network), IRCGA-DNN reduces the number of convergence iterations by 22.2% and 13.6% respectively. The results show that the proposed algorithm is better in both convergence speed and prediction performance, and is suitable for hyperparameter optimization of neural networks.
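
As a rough illustration of the optimization loop described above, the sketch below implements a generic real-coded genetic algorithm for hyperparameter search. The selection, crossover and mutation operators are simplified stand-ins for the paper's improved operators, and the names (evaluate_mse, bounds, the rates) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a real-coded GA loop for hyperparameter search.
# evaluate_mse(ind) is assumed to train a DNN with the given hyperparameters
# and return its validation MSE; bounds gives (low, high) per hyperparameter.
import random

def real_coded_ga(evaluate_mse, bounds, pop_size=20, generations=50,
                  crossover_rate=0.8, mutation_rate=0.1):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def select(pop, fitness):
        # Rank-proportional selection, a stand-in for the paper's
        # hierarchical proportional selection operator.
        ranked = sorted(zip(pop, fitness), key=lambda p: p[1])
        weights = [pop_size - i for i in range(pop_size)]
        return [random.choices(ranked, weights=weights)[0][0] for _ in range(pop_size)]

    best = None
    for _ in range(generations):
        fitness = [evaluate_mse(ind) for ind in pop]
        best = min(zip(pop, fitness), key=lambda p: p[1])
        parents = select(pop, fitness)
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < crossover_rate and dim > 1:   # single-point crossover
                cut = random.randrange(1, dim)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            children += [a[:], b[:]]
        for child in children:                                  # real-valued mutation
            for j, (lo, hi) in enumerate(bounds):
                if random.random() < mutation_rate:
                    child[j] = random.uniform(lo, hi)
        pop = children[:pop_size]
    return best   # (hyperparameter vector, MSE) of the best individual seen last
```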

CHAIN: edge computing node placement algorithm based on overlapping domination
Xuyan ZHAO, Yunhe CUI, Chaohui JIANG, Qing QIAN, Guowei SHEN, Chun GUO, Xianchao LI
Journal of Computer Applications    2023, 43 (9): 2812-2818.   DOI: 10.11772/j.issn.1001-9081.2022081250

In edge computing, computing resources are deployed at edge computing nodes closer to end users, and selecting appropriate deployment locations for edge computing nodes from the candidate locations can enhance the node capacity and the user Quality of Service (QoS) of edge computing services. However, there has been little research on how to place edge computing nodes so as to reduce the cost of edge computing. In addition, no existing edge computing node deployment algorithm maximizes the robustness of edge services while minimizing the deployment cost of edge computing nodes under QoS constraints such as edge service delay. To address the above issues, firstly, the edge computing node placement problem was transformed into a minimum dominating set problem with constraints by building a model of computing nodes, user transmission delay, and robustness. Then, the concept of overlapping domination was proposed so that network robustness could be measured on the basis of overlapping domination, and an edge computing node placement algorithm based on overlapping domination was designed, namely CHAIN (edge server plaCement algoritHm based on overlAp domINation). Simulation results show that CHAIN can reduce the system latency by 50.54% and 50.13% compared to the coverage-oriented approximation algorithm and the base-station-oriented random algorithm, respectively.
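
For intuition, the following sketch greedily solves a delay-constrained dominating-set placement of the kind described above. It is not the CHAIN algorithm itself: the overlap-domination robustness score is omitted, and the inputs (candidates, users, delay, max_delay) are hypothetical.

```python
# Greedy sketch of constrained dominating-set placement for edge nodes.
# delay[(c, u)] is the transmission delay between candidate site c and user u.
def place_edge_nodes(candidates, users, delay, max_delay):
    coverage = {c: {u for u in users if delay[(c, u)] <= max_delay} for c in candidates}
    uncovered, placed = set(users), []
    while uncovered:
        # Pick the candidate site that covers the most still-uncovered users.
        best = max(candidates, key=lambda c: len(coverage[c] & uncovered))
        if not coverage[best] & uncovered:
            break   # remaining users cannot be served within the delay constraint
        placed.append(best)
        uncovered -= coverage[best]
    return placed
```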

Quantum K-Means algorithm based on Hamming distance
Jing ZHONG, Chen LIN, Zhiwei SHENG, Shibin ZHANG
Journal of Computer Applications    2023, 43 (8): 2493-2498.   DOI: 10.11772/j.issn.1001-9081.2022091469

K-Means algorithms typically use Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data, which suffers from low efficiency and high computational complexity. Inspired by the significant advantage of Hamming distance in similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed. First, the data was encoded into quantum states, and the quantum Hamming distance was used to calculate the similarity between the points to be clustered and the K cluster centers. Then, Grover's minimum search algorithm was improved to find the cluster center closest to each point to be clustered. Finally, these steps were repeated until the designated number of iterations was reached or the cluster centers no longer changed. Based on the quantum simulation computing framework QisKit, the proposed algorithm was validated on the MNIST handwritten digit dataset and compared with various traditional and improved methods. Experimental results show that the F1 score of the QKMH algorithm is improved by 10 percentage points compared with that of the Manhattan distance-based quantum K-Means algorithm and by 4.6 percentage points compared with that of the latest optimized Euclidean distance-based quantum K-Means algorithm, and the time complexity of the QKMH algorithm is lower than those of the above comparison algorithms.
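
The loop below is a purely classical sketch of K-Means with Hamming distance on binary vectors, mirroring only the distance measure and the iteration structure of the method above; the quantum state preparation and the improved Grover minimum search are not reproduced here.

```python
# Classical sketch of K-Means with Hamming distance on binary vectors.
# points: list of equal-length tuples of 0/1 bits.
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def kmeans_hamming(points, k, iterations=20):
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Classical stand-in for the Grover-based minimum search step.
            nearest = min(range(k), key=lambda i: hamming(p, centers[i]))
            clusters[nearest].append(p)
        # Majority vote per bit gives the new (binary) cluster center.
        new_centers = [
            tuple(int(sum(bits) * 2 >= len(cl)) for bits in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers
```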

Instance segmentation algorithm based on Fastformer and self-supervised contrastive learning
Rong GAO, Jiawei SHEN, Xiongkai SHAO, Xinyun WU
Journal of Computer Applications    2023, 43 (4): 1062-1070.   DOI: 10.11772/j.issn.1001-9081.2022020270

To address the problems of low detection precision, coarse masks, and weak generalization ability of existing instance segmentation algorithms on occluded and blurred instances, an instance segmentation algorithm based on Fastformer and self-supervised contrastive learning was proposed. Firstly, to enhance the ability of the algorithm to extract global information from feature maps, a Fastformer module based on additive attention was added after the feature extraction network to deeply model the interrelationships among pixels in each layer of the feature map. Secondly, inspired by self-supervised learning, a self-supervised contrastive learning module was added to perform contrastive learning on instances in images, enhancing the algorithm's understanding of images and thereby improving segmentation results in environments with heavy noise interference. Experimental results show that, compared to the recent classical instance segmentation algorithm SOLOv2 (Segmenting Objects by LOcations v2), the proposed algorithm improves the mean Average Precision (mAP) by 3.1 and 2.5 percentage points on the Cityscapes dataset and the COCO2017 dataset, respectively. The proposed algorithm also achieves a good balance between real-time performance and precision, giving it good robustness for instance segmentation in complex scenes.
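
As a reference point for the attention module mentioned above, here is a simplified single-head PyTorch sketch of Fastformer-style additive attention; the dimensions, scaling and residual connection are generic assumptions and may differ from the configuration used in the paper.

```python
# Simplified single-head sketch of Fastformer-style additive attention.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1)        # scores queries -> global query
        self.w_k = nn.Linear(dim, 1)        # scores keys    -> global key
        self.out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                   # x: (batch, seq_len, dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        alpha = torch.softmax(self.w_q(q) * self.scale, dim=1)    # (B, N, 1)
        global_q = (alpha * q).sum(dim=1, keepdim=True)           # (B, 1, D)
        p = global_q * k                                          # element-wise interaction
        beta = torch.softmax(self.w_k(p) * self.scale, dim=1)
        global_k = (beta * p).sum(dim=1, keepdim=True)
        return self.out(global_k * v) + q                         # residual on the queries

# Example: AdditiveAttention(256)(torch.randn(2, 100, 256)) -> (2, 100, 256)
```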

Data trusted traceability method based on Merkle mountain range
Wei LIU, Cong ZHANG, Wei SHE, Xuan SONG, Zhao TIAN
Journal of Computer Applications    2022, 42 (9): 2765-2771.   DOI: 10.11772/j.issn.1001-9081.2021081369

Concerning the problems of the high cost of massive data storage and the low efficiency of data traceability verification in Internet of Things (IoT) systems, a data trusted traceability method based on Merkle Mountain Range (MMR), named MMRBCV (Merkle Mountain Range BlockChain Verification), was proposed. Firstly, the Inter-Planetary File System (IPFS) was used to store IoT data. Secondly, a double-blockchain structure combining consortium blockchains and private blockchains was designed to reliably record the data flow process. Finally, a block structure based on the MMR was constructed to enable rapid verification by lightweight IoT nodes during data tracing. Experimental results show that MMRBCV reduces the amount of data downloaded during data tracing, and that the data verification time is related to the structure of the MMR: when the MMR forms a perfect binary tree, the verification time is short. At a block height of 200 000, the maximum verification time of MMRBCV is about 10 ms, which is about 72% shorter than that of Simplified Payment Verification (SPV) (about 36 ms), indicating that the proposed method effectively improves verification efficiency.
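
To make the MMR idea concrete, the sketch below appends leaves to a Merkle Mountain Range by merging equal-height peaks, then "bags" the peaks into a single root digest; the hashing scheme and the way the root is committed are simplified assumptions, not the paper's exact block structure.

```python
# Minimal sketch of a Merkle Mountain Range: append leaves and compute a root.
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).hexdigest().encode()

def mmr_append(peaks, leaf):
    """peaks: list of (height, digest); equal-height peaks are merged after each append."""
    peaks = peaks + [(0, h(leaf))]
    while len(peaks) >= 2 and peaks[-1][0] == peaks[-2][0]:
        (hgt, right), (_, left) = peaks.pop(), peaks.pop()
        peaks.append((hgt + 1, h(left, right)))
    return peaks

def mmr_root(peaks):
    """Bag the peaks right-to-left into one commitment that a block could store."""
    digest = peaks[-1][1]
    for _, peak in reversed(peaks[:-1]):
        digest = h(peak, digest)
    return digest

# Example: peaks = []; then for each record r (bytes): peaks = mmr_append(peaks, r)
```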

Popularity prediction method of Twitter topics based on evolution patterns
Weifan XIE, Yan GUO, Guangsheng KUANG, Zhihua YU, Yuanhai XUE, Huawei SHEN
Journal of Computer Applications    2022, 42 (11): 3364-3370.   DOI: 10.11772/j.issn.1001-9081.2022010045

A popularity prediction method of Twitter topics based on evolution patterns was proposed to address the problem that previous popularity prediction methods did not take into account the differences between evolution patterns or the timeliness of prediction. Firstly, the K-SC (K-Spectral Centroid) algorithm was used to cluster the popularity sequences of a large number of historical topics, yielding 6 evolution patterns. Then, a Fully Connected Network (FCN) was trained as the prediction model on the historical topic data of each evolution pattern. Finally, in order to select the prediction model for a topic to be predicted, an Amplitude-Alignment Dynamic Time Warping (AADTW) algorithm was proposed to calculate the similarity between the known popularity sequence of the topic and each evolution pattern, and the prediction model of the most similar evolution pattern was selected to predict the popularity. In the task of predicting the popularity of the next 5 hours based on the known popularity of the first 20 hours, the Mean Absolute Percentage Error (MAPE) of the proposed method was reduced by 58.2% and 31.0% compared with those of the Auto-Regressive Integrated Moving Average (ARIMA) method and a single fully connected network, respectively. Experimental results show that the model group based on evolution patterns can predict the popularity of Twitter topics more accurately than a single model.
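
The sketch below illustrates the model-selection step: it computes a distance between the observed popularity sequence and each evolution-pattern centroid and returns the matching predictor. Plain DTW is used as a stand-in for the paper's amplitude-aligned variant (AADTW), and the inputs are hypothetical.

```python
# Sketch of selecting a per-pattern predictor by sequence similarity.
def dtw(a, b):
    """Plain dynamic time warping distance between two numeric sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def pick_model(observed, patterns, models):
    """patterns: evolution-pattern centroids; models: one trained predictor per pattern."""
    best = min(range(len(patterns)), key=lambda i: dtw(observed, patterns[i]))
    return models[best]
```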

Valve identification method based on double detection
Wei SHE, Qian ZHENG, Zhao TIAN, Wei LIU, Yinghao LI
Journal of Computer Applications    2022, 42 (1): 273-279.   DOI: 10.11772/j.issn.1001-9081.2021020333

Aiming at the problems that current valve identification methods in industry have a high miss rate for overlapping targets, low detection precision, poor target encapsulation, and inaccurate positioning of circle centers, a valve identification method based on double detection was proposed. Firstly, data augmentation was used to expand the samples in a lightweight way. Then, Spatial Pyramid Pooling (SPP) and a Path Aggregation Network (PAN) were added on the basis of the deep convolutional network; at the same time, the anchor boxes were adjusted and the loss function was improved to extract valve prediction boxes. Finally, the Circle Hough Transform (CHT) method was used to identify the valves within the prediction boxes a second time, so as to accurately locate the valve regions. The proposed method was compared with the original You Only Look Once (YOLO)v3, YOLOv4 and the traditional CHT methods, and the detection results were evaluated jointly by precision, recall and coincidence degree. Experimental results show that the average precision and recall of the proposed method reach 97.1% and 94.4% respectively, which are 2.9 and 1.8 percentage points higher than those of the original YOLOv3 method. In addition, the proposed method improves the target encapsulation and the positioning accuracy of the target center: the Intersection over Union (IoU) between the corrected box and the ground-truth box reaches 0.95, which is 0.05 higher than that of the traditional CHT method. The proposed method improves the success rate of target capture while improving identification accuracy, and has practical value in real applications.
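
A hedged sketch of the second detection stage is given below: it runs OpenCV's circle Hough transform inside each detector box and maps the circle centers back to image coordinates. The thresholds are illustrative, and the first-stage YOLO detector is assumed to supply (x1, y1, x2, y2) boxes.

```python
# Second-stage circle refinement inside detector boxes (illustrative parameters).
import cv2
import numpy as np

def refine_valves(gray_image, boxes):
    """gray_image: 8-bit single-channel image; boxes: list of (x1, y1, x2, y2)."""
    valves = []
    for (x1, y1, x2, y2) in boxes:
        roi = cv2.medianBlur(gray_image[y1:y2, x1:x2], 5)
        circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=max(1, (x2 - x1) // 2),
                                   param1=100, param2=30,
                                   minRadius=0, maxRadius=0)
        if circles is not None:
            cx, cy, r = np.round(circles[0, 0]).astype(int)
            valves.append((x1 + cx, y1 + cy, r))   # circle center in image coordinates
    return valves
```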

Privacy-preserving access control for mobile cloud services
JI Zhengbo, BAI Guangwei, SHEN Hang, ZHANG Peng
Journal of Computer Applications    2014, 34 (7): 1897-1901.   DOI: 10.11772/j.issn.1001-9081.2014.07.1897

To address the issues of security and privacy preservation in mobile cloud computing, an anonymity mechanism for cloud storage was proposed. Zero-knowledge proofs and digital signature technology were introduced into anonymous registration to simplify the key authentication steps; on this basis, a third party was used to bind users to their identity certificates, preventing legitimate cloud services from being abused for malicious purposes. For data sharing, the focus was on how to take advantage of the sharers' account parameters so as to solve the security issues caused by secret key loss. Theoretical analysis shows that the proposed identity certificate and shared key generation schemes help protect users' privacy.
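
As a loose illustration of the certificate-binding step, the sketch below has a trusted third party sign the pair (user pseudonym, user public key) with Ed25519 via the cryptography library; this is a generic signature example under stated assumptions, and the paper's zero-knowledge registration and shared-key generation are not modeled.

```python
# Generic sketch of a third party binding a user pseudonym to its public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

authority_key = Ed25519PrivateKey.generate()   # trusted third party's signing key

def issue_certificate(user_pseudonym: bytes, user_public_key: bytes) -> bytes:
    """The signature over (pseudonym, key) serves as the identity certificate."""
    return authority_key.sign(user_pseudonym + user_public_key)

def verify_certificate(cert: bytes, user_pseudonym: bytes, user_public_key: bytes) -> bool:
    try:
        authority_key.public_key().verify(cert, user_pseudonym + user_public_key)
        return True
    except InvalidSignature:
        return False
```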

Energy-aware P2P data sharing mechanism for heterogeneous mobile terminals
DU Peng, BAI Guangwei, SHEN Hang, CAO Lei
Journal of Computer Applications    2013, 33 (08): 2112-2116.  
In response to the issue of mobile terminal heterogeneity in existing Peer-to-Peer (P2P) data sharing networks, an energy-aware P2P data sharing mechanism for heterogeneous mobile terminals, named EADS, was proposed. An energy-aware module was introduced to let terminals determine the types of end-user devices and predict their residual energy. On this basis, the mechanism dynamically adjusted the data sharing strategy in accordance with changes in the network environment. Simulation results demonstrate that EADS achieves significant performance improvements in terms of energy utilization efficiency, load balancing, and data sharing time, thus enhancing the success rate of data distribution. On the premise of maintaining high file availability, the average energy consumption is reduced by up to 15%.
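
A minimal sketch of energy-aware peer selection is shown below: it predicts residual energy per device type with a linear drain model and prefers peers that can serve the transfer with energy to spare. The device profiles, drain rates and margin are assumptions for illustration, not EADS parameters.

```python
# Energy-aware peer selection sketch (illustrative device profiles and drain model).
DRAIN_RATE = {"phone": 1.0, "tablet": 0.6, "laptop": 0.3}   # % battery per MB served

def predict_residual(peer, transfer_mb):
    """peer: {'type': device type, 'battery': remaining battery in %}."""
    return peer["battery"] - DRAIN_RATE[peer["type"]] * transfer_mb

def choose_peers(peers, transfer_mb, needed):
    """Rank candidate peers by predicted residual energy after serving the chunk."""
    viable = [p for p in peers if predict_residual(p, transfer_mb) > 10]  # keep a margin
    viable.sort(key=lambda p: predict_residual(p, transfer_mb), reverse=True)
    return viable[:needed]
```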